Engineer's Guide to Using AI
Thinking With, Not Thinking For
An Engineer’s Guide to Using AI Without Losing Your Mind
Joseph P. McFadden Sr.
with Claude (Anthropic) as Collaborator
Engineer | Educator | Author
www.McFaddenCAE.com
March 2026
Part of the Building Intuition Before Equations Series
The Shortcut Machine
There is a question I keep hearing from colleagues, students, and readers, and it usually sounds something like this: “Aren’t you just using AI to write your stuff?”
It is a fair question. The landscape right now is flooded with content that was generated by typing a prompt into a machine and posting whatever came back. You see it on LinkedIn, in student papers, in professional reports that read like they were assembled by committee and reviewed by nobody. The volume is enormous. The depth is shallow. And the people posting it often have no idea whether what they published is accurate, original, or even coherent beyond the surface.
This is not an AI problem. This is a human problem. Specifically, it is the same human problem I have been writing about for months: our brain’s relentless drive to minimize metabolic expenditure. AI, used carelessly, is the ultimate cognitive shortcut. And our evolutionary wiring loves shortcuts.
But there is another way to use these tools. A way that does not make you dumber. A way that, properly used, makes you sharper, more rigorous, and more honest in your thinking than you could be on your own.
This essay is about that difference.
A Question Worth Asking
Before we continue, I want you to take a moment and think: how often do you use copy and paste in a day?
Have you considered what that habit may actually be doing to your ability to think?
I have asked myself that very question. It is part of why I do what I do: generating essays and audiobooks to share what I have discovered while seeking an answer. The search has led to more questions, and to more journeys. And that, I have come to realize, is the point.
Your Brain on Copy and Paste
To understand why the distinction matters, you need to understand what is happening inside your head when you use AI as a shortcut versus when you use it as a thinking partner.
Your brain consumes roughly twenty percent of your body's energy while accounting for only about two percent of its mass. Deep, analytical thinking—the kind that requires your prefrontal cortex to wrestle with ambiguity, evaluate evidence, and construct original arguments—is the most metabolically expensive thing your brain does. Evolution has spent hundreds of thousands of years optimizing your neural architecture to avoid exactly that kind of work whenever possible.
When you type a prompt into an AI system and post the output without engaging with it, your brain gets exactly what it wants: a result with almost zero cognitive expenditure. The amygdala is calm because there is no uncertainty to process. The hippocampus has nothing meaningful to encode because you did not struggle with the material. And the prefrontal cortex—the part of your brain responsible for critical thinking, judgment, and original insight—never had to show up.
This is the neurological equivalent of passive learning. And I have written at length about what passive learning does to the brain over time: it accelerates cognitive decline. The neural pathways you do not use get pruned. The circuits you do not challenge atrophy. The system that is not stressed does not strengthen. In engineering, we call this a material that has not been work-hardened. It looks fine sitting on a shelf. Put it under load, and it fails.
That is what copy-and-pasting AI output is doing to the people who rely on it. It looks productive. It feels efficient. And it is quietly weakening the very cognitive infrastructure they need to do meaningful work.
The Finite Element Analogy
In engineering, we have been using computational tools for decades. Finite element analysis, computational fluid dynamics, structural simulation software—these are enormously powerful tools that do calculations no human could perform by hand in a reasonable timeframe.
Nobody calls an engineer lazy for using finite element analysis. But nobody calls them an engineer if they cannot interpret the output.
The software runs the computation. The engineer decides what questions to ask, what boundary conditions to apply, what assumptions are baked into the model, and whether the results make physical sense. An experienced engineer looks at a stress contour plot and immediately recognizes when something is wrong—when the mesh is too coarse, when the boundary conditions are unrealistic, when the material model does not match the actual behavior. That recognition comes from years of building intuition through hands-on work with real materials and real failures.
A student who has never touched a failed component, never performed a root cause analysis, never held a fractured surface in their hands and asked “why did this break here and not there?” cannot do that. They can run the software. They can generate beautiful color plots. But they cannot tell you when the software is lying to them.
AI is no different. It is a computational tool of extraordinary power. But without the human judgment that comes from deep domain knowledge, original observation, and the hard-won intuition built through years of doing the actual work, it is just a machine generating plausible-sounding output that may or may not be true.
A Different Approach: AI as a Socratic Partner
My methodology is deliberate, and it begins long before any AI system is involved.
Step one is primary research. I read the literature. Not summaries, not abstracts, not someone else’s blog post about the research. The actual papers, the actual books, the actual primary sources. Jung, Nietzsche, Solzhenitsyn in philosophy. Peer-reviewed neuroscience on dendritic computation, synaptic plasticity, amygdala function. Evolutionary psychology. Thermodynamics. Michael Levin on bioelectricity. Penrose on consciousness. This takes time. It is supposed to take time.
Step two is incubation. I sit with what I have read. I let connections form on their own timeline—sometimes over days, sometimes in the shower, sometimes while driving. This is not passive. This is the brain doing its most important unconscious work: pattern recognition across domains. The engineering intuition that says “this pattern in brain behavior looks exactly like that pattern in material failure” does not come from a prompt. It comes from decades of working with both systems.
Step three is the Socratic dialogue. When an observation crystallizes—when I see a connection or a contradiction that I want to stress-test—I bring it to an AI partner. Not one partner. Multiple. I work across different AI systems deliberately because each one reasons differently, challenges differently, and finds different weaknesses in an argument. These are not brief exchanges. They are extended conversations, spanning hours over multiple days, where I present my thinking, push back on the analysis, ask for the strongest counterarguments, and refine my understanding through sustained intellectual pressure.
During this period, I often ask for new sources of primary material to read, always seeking the most current peer-reviewed work. Not settled information, because nothing is ever truly settled. If it were, there would be no journey and, as I see it, no joy.
Step four is synthesis. I take the refined thinking and build it into something—an essay, an audiobook, a classroom exercise. The writing process itself reveals gaps in logic and forces further refinement. Then I test it again.
At no point in this process does an AI system generate my ideas. It challenges them. It deepens them. It finds holes in them. That is what a good graduate seminar does. That is what a brilliant colleague does when you present your work and they ask the question you were hoping nobody would ask.
A Framework for Everyone
You do not need four and a half decades of engineering experience to use AI this way. But you do need to adopt a few principles that go against every instinct your brain’s energy-conservation system is pushing you toward.
Bring something to the table. Before you open an AI tool, have a position, an observation, a question that is genuinely yours. “Write me an essay about X” is a shortcut. “I have noticed that X seems to follow the same pattern as Y—help me test whether that connection holds” is a collaboration.
Push back on the output. Never accept the first response as truth. Ask for counterarguments. Ask what evidence would disprove the claim. Ask what the response is leaving out. If you are not arguing with the AI at least some of the time, you are not using it properly.
Use multiple systems. Different AI models have different strengths, different blind spots, and different tendencies. Running your thinking through multiple partners is the intellectual equivalent of getting a second and third opinion. It is what any responsible engineer does before signing off on a design.
Do the metabolically expensive work yourself. The reading. The sitting with uncertainty. The forming of original observations. The writing. These are the activities that strengthen your prefrontal cortex, give your hippocampus something worth encoding, and build the intuition that allows you to recognize when an AI output is wrong. You cannot outsource this and retain the ability to think.
Be honest about the process. If you used AI in your work, say so. Describe how. Methodology matters. “Where did this come from?” is a legitimate question, and the answer should be transparent. Credibility is built on honesty, not on pretending you did everything alone.
Why This Matters
We are at an inflection point. AI tools are becoming more powerful, more accessible, and more embedded in every domain of professional and academic life. The question is not whether people will use them. They will. The question is whether they will use them in ways that make them stronger or weaker.
The neuroscience is clear. The brain that is not challenged declines. The neural pathways that are not used get pruned. The prefrontal cortex that is never asked to do hard, ambiguous, uncomfortable analytical work loses its capacity to do that work. This is not a metaphor. This is biology.
The parallel to my earlier work on passive learning and cognitive decline is direct. We spent decades building educational systems that made learning easier, more comfortable, and less demanding. The result was not smarter students. It was students who were less equipped to handle the complexity of real problems. AI, used as a shortcut, threatens to accelerate that same trajectory—not just in education but across every profession that requires original thought.
But AI used as a Socratic partner—as a tool that challenges, deepens, and stress-tests your own thinking—has the potential to be the most powerful intellectual amplifier ever built. The difference is entirely in the human’s approach.
The Driver and the Passenger
I have said it many times, and it applies here more than anywhere: be a driver, not a passenger.
When you let AI think for you, you are a passenger. You are being carried to a destination someone else chose by a route you did not examine in a vehicle you do not understand. You will arrive somewhere, but you will not know why you are there or whether it is where you should be.
When you use AI to think with you, you are driving. You chose the destination based on your own observations and experience. You are navigating based on your own understanding of the terrain. And when your co-pilot suggests a different route, you have the knowledge to evaluate whether that route makes sense or whether it leads off a cliff.
The tools are extraordinary. Use them. But bring your own mind to the table. Do the reading. Sit with the uncertainty. Form your own observations. And then test them relentlessly against the most powerful Socratic partners ever built.
That is not laziness. That is rigor. And in a world that is increasingly content to let machines do the thinking, rigor may be the most valuable skill you can develop.
—————
Be a driver, not a passenger.
Now go do the work.
Combating Engineering Mind Blindness, One Student at a Time.
Remember, every failure tells a story.
Joseph P. McFadden Sr.
Engineer • Lifelong Learner • Holistic Analyst
www.McFaddenCAE.com • McFadden@snet.net